The human object recognition system has about a trillion synapses, and computer vision systems will probably need to learn a similar number of parameters to be competitive. This makes it necessary to learn useful feature detectors from unlabeled examples, as the cortex does. I will describe how this can be done and illustrate the approach with two multi-layer neural networks that perform object recognition on two very different databases. The NORB database contains stereo, gray-level images of five object classes under a wide range of viewpoints and lighting conditions. The TINY IMAGES database contains 80 million small color images, harvested from the web, from about 50,000 classes. In both databases the number of labeled examples is far less than a million. Both systems learn about 100 million parameters in a few GPU days and achieve state-of-the-art performance. I will also show that the same approach works even when the labels are so unreliable that they are mostly wrong. The neural networks I will describe were recently developed by Vinod Nair, Alex Krizhevsky, and Marc'Aurelio Ranzato.
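The abstract does not spell out how feature detectors are learned from unlabeled examples. A standard technique in this line of work is a restricted Boltzmann machine trained with one-step contrastive divergence (CD-1); the sketch below is an illustrative stand-in, not the talk's exact model, and all names, sizes, and hyperparameters in it are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b_vis, b_hid, lr=0.1):
    """One CD-1 update on a batch of binary visible vectors v0 (in place)."""
    # Up pass: hidden probabilities and a stochastic binary sample.
    h0_prob = sigmoid(v0 @ W + b_hid)
    h0 = (rng.random(h0_prob.shape) < h0_prob).astype(float)
    # Down pass: reconstruct the visibles, then re-infer the hiddens.
    v1_prob = sigmoid(h0 @ W.T + b_vis)
    h1_prob = sigmoid(v1_prob @ W + b_hid)
    # Gradient approximation: data correlations minus reconstruction correlations.
    n = v0.shape[0]
    W += lr * (v0.T @ h0_prob - v1_prob.T @ h1_prob) / n
    b_vis += lr * (v0 - v1_prob).mean(axis=0)
    b_hid += lr * (h0_prob - h1_prob).mean(axis=0)
    # Reconstruction error is a rough but convenient progress measure.
    return np.mean((v0 - v1_prob) ** 2)

# Tiny demo: 16 visible units, 8 hidden feature detectors, random binary "images".
n_vis, n_hid = 16, 8
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b_vis = np.zeros(n_vis)
b_hid = np.zeros(n_hid)
data = (rng.random((100, n_vis)) < 0.3).astype(float)

errors = [cd1_step(data, W, b_vis, b_hid) for _ in range(200)]
```

After training, the rows of `W` act as learned feature detectors: each hidden unit responds to a pattern of correlated visible units, and no labels were used at any point.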